
    Agreement Study Using Gesture Description Analysis

    Choosing adequate gestures for touchless interfaces is a challenging task that has a direct impact on human-computer interaction. Such gestures are commonly determined by the designer or through ad-hoc, rule-based, or agreement-based methods. Previous approaches to assessing agreement grouped gestures into equivalence classes and ignored the integral properties they share. In this work, we propose a generalized framework, Gesture Description Analysis (GDA), that inherently incorporates gesture descriptors into the agreement analysis. In contrast to previous approaches, we represent gestures as binary description vectors and allow them to be partially similar. In this context, we introduce a new metric, the Soft Agreement Rate (SAR), to measure the level of agreement, and provide a mathematical justification for it. Further, we performed computational experiments to study the behavior of SAR and demonstrate that existing agreement metrics are a special case of our approach. Our method was evaluated through a guessability study conducted with a group of neurosurgeons; nevertheless, our formulation can be applied to any other user-elicitation study. Results show that the level of agreement obtained by SAR is 2.64 times higher than that of previous metrics. Finally, we show that our approach complements existing agreement techniques by generating an artificial lexicon based on the most agreed-upon properties.
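    To make the idea concrete, here is a minimal sketch of how a soft agreement score over binary description vectors could be computed. The paper defines the exact SAR formula; this illustration merely assumes Jaccard similarity between descriptors, averaged over all pairs of proposals for one referent, and all names are illustrative.

```python
import numpy as np

def soft_agreement(descriptors):
    """Average pairwise Jaccard similarity of binary description vectors.

    descriptors: (n, d) 0/1 array, one row per participant proposal
    for a single referent. Returns a score in [0, 1].
    """
    def jaccard(a, b):
        union = np.logical_or(a, b).sum()
        if union == 0:
            return 1.0  # two empty descriptions are identical
        return np.logical_and(a, b).sum() / union

    n = len(descriptors)
    if n < 2:
        return 1.0
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(jaccard(descriptors[i], descriptors[j]) for i, j in pairs) / len(pairs)
```

    Note that replacing the Jaccard term with exact-match similarity (1 if two vectors are identical, 0 otherwise) collapses this score to a classical pair-counting agreement rate over equivalence classes, consistent with the abstract's claim that existing metrics are a special case.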

    Augmented Reality Future Step Visualization for Robust Surgical Telementoring

    Introduction: Surgical telementoring connects expert mentors with trainees performing urgent care in austere environments. However, such environments impose unreliable network quality, with significant latency and low bandwidth. We have developed an augmented reality telementoring system that includes future step visualization of the medical procedure. Pregenerated video instructions of the procedure are dynamically overlaid onto the trainee's view of the operating field when the network connection with the mentor is unreliable.

    Methods: Our future step visualization uses a tablet suspended above the patient's body, through which the trainee views the operating field. Before trainee use, an expert records a “future library” of step-by-step video footage of the operation. Videos are displayed to the trainee as semitransparent graphical overlays. We conducted a study in which participants completed a cricothyroidotomy under telementored guidance, using one of two conditions: a conventional telestrator or our system with future step visualization. During the operation, the connection between trainee and mentor was bandwidth-throttled. Recorded metrics were idle time ratio, recall error, and task performance.

    Results: Participants in the future step visualization condition had a 48% smaller idle time ratio (14.5% vs. 27.9%, P < 0.001), 26% less recall error (119 vs. 161, P = 0.042), and 10% higher task performance scores (rater 1: 90.83 vs. 81.88, P = 0.008; rater 2: 88.54 vs. 79.17, P = 0.042) than participants in the telestrator condition.

    Conclusions: Future step visualization in surgical telementoring is an important fallback mechanism when the trainee/mentor network connection is poor, and it is a key step toward semiautonomous and, eventually, completely mentor-free medical assistance systems.
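    The fallback behavior described above, switching from live mentor annotations to pregenerated overlays when the link degrades, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the thresholds, class names, and data structures are all assumptions.

```python
from dataclasses import dataclass

# Illustrative thresholds; the paper does not specify exact values.
MAX_LATENCY_MS = 500
MIN_BANDWIDTH_KBPS = 256

@dataclass
class LinkQuality:
    latency_ms: float
    bandwidth_kbps: float

def select_overlay(link: LinkQuality, step: int, future_library: dict, live_annotations):
    """Choose what to render on the trainee's view of the operating field."""
    degraded = (link.latency_ms > MAX_LATENCY_MS
                or link.bandwidth_kbps < MIN_BANDWIDTH_KBPS)
    if degraded:
        # Fall back to the pregenerated semitransparent video clip
        # recorded by the expert for the current procedure step.
        return future_library[step]
    # Otherwise keep showing the mentor's live annotations.
    return live_annotations
```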

    Pose Imitation Constraints for Kinematic Structures

    Robot usage has increased in different areas of society and human work, including medicine, transportation, education, space exploration, and the service industry. This phenomenon has generated a sudden enthusiasm to develop more intelligent robots, better equipped to perform tasks at least as well as humans do. Such jobs require human involvement as operators or teammates, since robots struggle with automation in everyday settings. Soon, the role of humans will extend far beyond users or stakeholders to include those responsible for training such robots.

    A popular form of teaching is to allow robots to mimic human behavior. This method is intuitive and natural and does not require specialized robotics knowledge. While there are other methods for robots to complete tasks effectively, collaborative tasks require mutual understanding and coordination that is best achieved by mimicking human motion. This mimicking problem has been tackled through skill imitation, which reproduces human-like motion during a task shown by a teacher. Skill imitation builds on faithfully replicating the human pose and requires two steps. First, an expert's demonstration is captured and pre-processed, and motion features are obtained. Then, a learning algorithm is used to optimize for the task. The learning algorithms are often paired with traditional control systems to transfer the demonstration to the robot successfully.

    However, this methodology currently faces a generalization issue: most solutions are formulated for specific robots or tasks. The lack of generalization is a problem, especially as robots are replaced and upgraded far more frequently in collaborative environments than in traditional manufacturing. As with humans, we expect robots to have more than one skill and the same skill to be performed by more than one type of robot.

    Thus, we address this issue by proposing a human motion imitation framework that can be efficiently computed and generalized to different kinematic structures (e.g., different robots). To develop this framework, we train an algorithm to augment collaborative demonstrations, facilitating generalization to unseen scenarios. Then, we create a model for pose imitation that converts human motion to a flexible constraint space. This space can be directly mapped to different kinematic structures by specifying a correspondence between the main human joints (i.e., shoulder, elbow, wrist) and robot joints. The model permits an arbitrary number of robotic links between two assigned human joints, allowing different robots to mimic the demonstrated task and human pose. Last, we incorporate the constraint model into a reward that informs a reinforcement learning algorithm during optimization, as sketched below.

    We tested the proposed methodology in different collaborative scenarios, assessing the task success rate, pose imitation accuracy, the occlusion that the robot produces in the environment, the number of collisions, and the learning efficiency of the algorithm. The results show that the proposed framework creates effective collaboration across different robots and tasks.
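    As an illustration of that last step, a reward blending task success with closeness to the demonstrated pose might look like the following sketch. The joint correspondence, the exponential shaping, and the mixing weight are assumptions made for illustration; the dissertation defines its own constraint space and reward.

```python
import numpy as np

# Illustrative correspondence: each named human joint is assigned to one
# robot joint index; any number of robot links may lie between them.
CORRESPONDENCE = {"shoulder": 0, "elbow": 3, "wrist": 6}

def imitation_reward(human_pose, robot_joints, task_reward, alpha=0.5):
    """Blend task success with pose-imitation accuracy.

    human_pose:   dict mapping joint name -> 3D position (np.ndarray)
    robot_joints: (n, 3) array of robot joint positions (from forward kinematics)
    task_reward:  scalar task-success term from the environment
    alpha:        weight between task success and pose imitation
    """
    errors = [np.linalg.norm(human_pose[name] - robot_joints[idx])
              for name, idx in CORRESPONDENCE.items()]
    pose_term = float(np.exp(-np.mean(errors)))  # approaches 1.0 as poses coincide
    return alpha * task_reward + (1.0 - alpha) * pose_term
```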

    Consensus measured by Metric I (state of the art) and Metric II (the Jaccard distance using semantic descriptors).

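    For reference, the Jaccard distance named in Metric II is the standard set-based definition; a minimal implementation follows, with descriptor names chosen purely for illustration.

```python
def jaccard_distance(a, b):
    """Jaccard distance between two sets of semantic descriptors."""
    a, b = set(a), set(b)
    union = a | b
    if not union:
        return 0.0  # two empty descriptor sets are identical
    return 1.0 - len(a & b) / len(union)

# Example with illustrative descriptors: one shared item out of three total.
print(jaccard_distance({"swipe", "one-hand"}, {"swipe", "two-hands"}))  # ~0.667
```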

    This form contains a list of 34 commands.

    Each command is highlighted in gray. The rectangle at the left of the command corresponds to the context of the gesture, and the 2-4 rectangles to the right correspond to the modifiers.